Search for: All records

Creators/Authors contains: "Connolly, Andrew J"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Progress in machine learning and artificial intelligence promises to advance research and understanding across a wide range of fields and activities. In tandem, increased awareness of the importance of open data for reproducibility and scientific transparency is making inroads in fields that have not traditionally produced large publicly available datasets. Data sharing requirements from publishers and funders, as well as from other stakeholders, have also created pressure to make datasets with research and/or public interest value available through digital repositories. However, to make the best use of existing data, and to facilitate the creation of useful future datasets, robust, interoperable and usable standards need to evolve and adapt over time. The open-source development model provides significant potential benefits to the process of standard creation and adaptation. In particular, data and metadata standards can use long-standing technical and socio-technical processes that have been key to managing the development of software, and which allow broad community input to be incorporated into the formulation of these standards. On the other hand, open-source models carry unique risks that need to be considered. This report surveys existing open-source standards development, addressing these benefits and risks. It outlines recommendations for standards developers, funders and other stakeholders on the path to robust, interoperable and usable open-source data and metadata standards.
  2. Abstract The boundary of solar system object discovery lies in detecting its faintest members. However, their discovery in detection catalogs from imaging surveys is fundamentally limited by the practice of thresholding detections at signal-to-noise ratio (SNR) ≥ 5 to maintain catalog purity. Faint moving objects can be recovered from survey images using the shift-and-stack algorithm, which coadds pixels from multi-epoch images along a candidate trajectory. Trajectories matching real objects accumulate signal coherently, enabling high-confidence detections of very faint moving objects. Applying shift-and-stack comes with high computational cost, which scales with target object velocity, typically limiting its use to searches for slow-moving objects in the outer solar system. This work introduces a modified shift-and-stack algorithm that trades sensitivity for speedup. Our algorithm stacks low-SNR detection catalogs instead of pixels; the sparsity of these catalogs enables approximations that reduce the number of stacks required. Our algorithm achieves real-world speedups of 10–10³× over image-based shift-and-stack while retaining the ability to find faint objects. We validate its performance by recovering synthetic inner and outer solar system objects injected into images from the DECam Ecliptic Exploration Project. Exploring the sensitivity–compute-time trade-off of this algorithm, we find that our method achieves a speedup of ∼30× with 88% of the memory usage while sacrificing 0.25 mag in depth compared to image-based shift-and-stack. These speedups enable the broad application of shift-and-stack to large-scale imaging surveys and searches for faint inner solar system objects. We provide a reference implementation via the find-asteroids Python package at https://github.com/stevenstetzler/find-asteroids. (A schematic sketch of the catalog-stacking idea appears after this list.)
    Free, publicly accessible full text available November 26, 2026.
  3. Abstract Trans-Neptunian objects provide a window into the history of the solar system, but they can be challenging to observe due to their distance from the Sun and relatively low brightness. Here we report the detection of 75 moving objects that we could not link to any other known objects, the faintest of which has a VR magnitude of 25.02 ± 0.93, using the Kernel-Based Moving Object Detection (KBMOD) platform. We recover an additional 24 sources with previously known orbits. We place constraints on the barycentric distance, inclination, and longitude of ascending node of these objects. The unidentified objects have a median barycentric distance of 41.28 au, placing them in the outer solar system. The observed inclination and magnitude distributions of all detected objects are consistent with previously published KBO distributions. We describe extensions to KBMOD, including a robust percentile-based lightcurve filter, an in-line graphics-processing-unit (GPU) filter, new coadded stamp generation, and a convolutional neural network stamp filter, which allow KBMOD to take advantage of difference images. These enhancements mark a significant improvement in the readiness of KBMOD for deployment on future big-data surveys such as LSST. (A sketch of a percentile-based lightcurve filter appears after this list.)
  4. Abstract We present a detailed study of the observational biases of the DECam Ecliptic Exploration Project's B1 data release and survey simulation software that enables direct statistical comparisons between models and our data. We inject a synthetic population of objects into the images and subsequently recover them in the same processing as our real detections. This enables us to characterize the survey's completeness as a function of apparent magnitude and on-sky rate of motion. We study the statistically optimal functional form for the magnitude efficiency, and develop a methodology that can estimate the magnitude and rate efficiencies for all of the survey's pointing groups simultaneously. We have determined that our peak completeness is on average 80% in each pointing group, and that our efficiency drops to 25% of this value at m₂₅ = 26.22. We describe the freely available survey simulation software and its methodology. We conclude by using it to infer that our effective search area for objects at 40 au is 14.8 deg², and that our lack of dynamically cold distant objects means that there are at most 8 × 10³ objects with 60 < a < 80 au and absolute magnitudes H ≤ 8. (A sketch of a completeness curve parameterized by m₂₅ appears after this list.)
  5. Abstract We present the first set of trans-Neptunian objects (TNOs) observed on multiple nights in data taken from the DECam Ecliptic Exploration Project. Of these 110 TNOs, 105 do not coincide with previously known TNOs and appear to be new discoveries. Each individual detection for our objects resulted from a digital tracking search at TNO rates of motion, using two-to-four-hour exposure sets, and the detections were subsequently linked across multiple observing seasons. This procedure allows us to find objects with magnitudes m_VR ≈ 26. The object discovery processing also included a comprehensive population of objects injected into the images, with a recovery and linking rate of at least 94%. The final orbits were obtained using a specialized orbit-fitting procedure that accounts for the positional errors derived from the digital tracking procedure. Our results include robust orbits and magnitudes for classical TNOs with absolute magnitudes H ∼ 10, as well as a dynamically detached object found at 76 au (semimajor axis a ≈ 77 au). We find a disagreement between our population of classical TNOs and the CFEPS-L7 three-component model for the Kuiper Belt.
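
The catalog-based shift-and-stack summarized in item 2 can be illustrated with a short sketch. The following is a minimal, hypothetical Python illustration of the core idea only, not the find-asteroids implementation; the function name, the grid binning, and the 5-sigma threshold are all assumptions. Detections are shifted back to a common reference epoch along a candidate velocity and their SNR is coadded in spatial bins, so that detections belonging to a real moving object pile up in one bin.

    import numpy as np

    def stack_catalog(times, xs, ys, snrs, vx, vy, bin_size=1.0, threshold=5.0):
        """Coadd low-SNR catalog detections along a candidate trajectory.

        times, xs, ys, snrs : 1-D arrays, one entry per detection
        vx, vy              : candidate on-sky rates (position units per time unit)
        """
        t0 = times.min()
        # Undo the candidate motion: a real object moving at (vx, vy) lands
        # in the same spatial bin at every epoch.
        x0 = xs - vx * (times - t0)
        y0 = ys - vy * (times - t0)
        ix = np.floor(x0 / bin_size).astype(int)
        iy = np.floor(y0 / bin_size).astype(int)
        # Sum SNR in quadrature per bin; coherent trajectories accumulate signal.
        stacks = {}
        for i, j, s in zip(ix, iy, snrs):
            stacks[(i, j)] = stacks.get((i, j), 0.0) + s * s
        # Keep bins whose stacked SNR clears the detection threshold.
        return {k: v ** 0.5 for k, v in stacks.items() if v ** 0.5 >= threshold}

In practice such a search repeats this over a grid of candidate (vx, vy) rates; because the catalogs are sparse, each pass touches far fewer entries than an image-based stack touches pixels, which is the source of the quoted speedups.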
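
The robust percentile-based lightcurve filter mentioned in item 3 is, in spirit, an outlier-rejection step applied to the per-epoch fluxes along a candidate trajectory. The sketch below is an illustrative assumption built on the interquartile-range estimate of the Gaussian sigma (sigma-G), not KBMOD's actual code; the 2-sigma default is likewise invented.

    import numpy as np

    def percentile_clip(fluxes, n_sigma=2.0):
        """Flag lightcurve points as valid using percentile-based clipping."""
        fluxes = np.asarray(fluxes, dtype=float)
        q25, q50, q75 = np.percentile(fluxes, [25, 50, 75])
        # 0.7413 * IQR is a robust estimate of sigma for a Gaussian core,
        # insensitive to a few wildly discrepant epochs.
        sigma_g = 0.7413 * (q75 - q25)
        return np.abs(fluxes - q50) <= n_sigma * sigma_g

A candidate trajectory would then be re-scored using only the epochs flagged valid, so that a single bad difference-image measurement cannot create or destroy a detection.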
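
Item 4 quotes a peak completeness near 80% and m₂₅ = 26.22, the magnitude at which the efficiency falls to 25% of its peak. The abstract does not state which functional form the authors found optimal, so the sketch below uses a plain logistic rolloff purely as an assumed example of how such a parameterization behaves; the width value is invented.

    import numpy as np

    def completeness(m, eps0=0.80, m25=26.22, width=0.1):
        """Recovered fraction of injected objects at apparent magnitude m."""
        # Logistic rolloff; the ln(3) offset pins eps0/4 exactly at m = m25,
        # since 1 / (1 + exp(ln 3)) = 1/4.
        return eps0 / (1.0 + np.exp((m - m25) / width + np.log(3.0)))

In a survey simulator this curve weights a model population's detection probability per pointing group, with the rate-of-motion efficiency entering as a second, analogous factor.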